Search Results: "sun"

9 January 2024

Louis-Philippe Véronneau: 2023: A Musical Retrospective

I ended 2022 with a musical retrospective and very much enjoyed writing that blog post. As such, I have decided to do the same for 2023! From now on, this will probably be an annual thing :) Albums In 2023, I added 73 new albums to my collection: nearly 3 albums every two weeks! I listed them below in the order in which I acquired them. I purchased most of these albums when I could and borrowed the rest at libraries. If you want to browse through, I added links to the album covers pointing either to websites where you can buy them or to Discogs when digital copies weren't available. Once again this year, it seems that Punk (mostly Oï!) and Metal dominate my list, mostly fueled by Angry Metal Guy and the amazing Montréal Skinhead/Punk concert scene. Concerts A trend I started in 2022 was to go to as many concerts of artists I like as possible. I'm happy to report I went to around 80% more concerts in 2023 than in 2022! Looking back at my list, April was quite a busy month... Here are the concerts I went to in 2023: Although metalfinder continues to work as intended, I'm very glad to have discovered the Montréal underground scene has departed from Facebook/Instagram and adopted en masse Gancio, a FOSS community agenda that supports ActivityPub. Our local instance, askapunk.net, is pretty much all I could ask for :) That's it for 2023!

3 January 2024

John Goerzen: Consider Security First

I write this in the context of my decision to ditch Raspberry Pi OS and move everything I possibly can, including my Raspberry Pi devices, to Debian. I will write about that later. But for now, I wanted to comment on something I think is often overlooked and misunderstood by people considering distributions or operating systems: the huge importance of getting security updates in an automated and easy way.

Background Let's assume that these statements are true, which I think are well-supported by available evidence:
  1. Every computer system (OS plus applications) that can do useful modern work has security vulnerabilities, some of which are unknown at any given point in time;
  2. During the lifetime of that computer system, some of these vulnerabilities will be discovered. For a (hopefully large) subset of those vulnerabilities, timely patches will become available.
Now then, it follows that applying those timely patches is a critical part of having a system that is as secure as possible. Of course, you have to do other things as well (good passwords, secure practices, etc.), but, fundamentally, if your system lacks patches for known vulnerabilities, you've already lost at the security ballgame.

How to stay patched There is something of a continuum of how you might patch your system. It runs roughly like this, from best to worst:
  1. All components are kept up-to-date automatically, with no intervention from the user/operator
  2. The operator is automatically alerted to necessary patches, and they can be easily installed with minimal intervention
  3. The operator is automatically alerted to necessary patches, but they require significant effort to apply
  4. The operator has no way to detect vulnerabilities or necessary patches
It should be obvious that the first situation is ideal. Every other situation relies on the timeliness of human action to keep up-to-date with security patches. This is a fallible situation; humans are busy, take trips, dismiss alerts, miss alerts, etc. That said, it is rare to find any system living truly all the way in that scenario, as you'll see.

What is your "system"? A critical point here is: what is your "system"? It includes:
  • Your kernel
  • Your base operating system
  • Your applications
  • All the libraries needed to run all of the above
Some OSs, such as Debian, make little or no distinction between the base OS and the applications. Others, such as many BSDs, have a distinction there. And in some cases, people will compile or install applications outside of any OS mechanism. (It must be stressed that by doing so, you are taking the responsibility of patching them on your own shoulders.)

How do common systems stack up?
  • Debian, with its support for unattended-upgrades, needrestart, debian-security-support, and such, is largely category 1. It can automatically apply security patches, in most cases can restart the necessary services for the patch to take effect, and will alert you when some processes or the system must be manually restarted for a patch to take effect (for instance, a kernel update). Those cases requiring manual intervention are category 2. The debian-security-support package will even warn you of gaps in the system. You can also use debsecan to scan for known vulnerabilities on a given installation. (A minimal sketch of enabling this follows the list.)
  • FreeBSD has no way to automatically install security patches for things in the packages collection. As with many rolling-release systems, you can't automate the installation of these security patches with FreeBSD because it is not safe to blindly update packages. It's not safe to blindly update packages because they may bring along more than just security patches: they may represent major upgrades that introduce incompatibilities, etc. Unlike Debian's practice of backporting fixes and thus producing narrowly-tailored patches, forcing upgrades to newer versions precludes a minimal-intervention install. Therefore, rolling release systems are category 3.
  • Things such as Snap, Flatpak, AppImage, Docker containers, Electron apps, and third-party binaries often contain embedded libraries and such for which you have no easy visibility into their status. For instance, if there was a bug in libpng, would you know how many of your containers had a vulnerability? These systems are category 4: you don't even know if you're vulnerable. It's for this reason that my Debian-based Docker containers apply security patches before starting processes, and also run unattended-upgrades and friends.
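To make category 1 concrete, here is a minimal sketch of how unattended upgrades are typically enabled on a Debian system. The two APT::Periodic settings shown are the standard knobs, but file names and defaults can vary by release, so treat this as illustrative rather than authoritative:

$ sudo apt install unattended-upgrades needrestart
$ sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades
$ cat /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";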

The pernicious library problem As mentioned in my last category above, hidden vulnerabilities can be a big problem. I've been writing about this for years. Back in 2017, I wrote an article focused on Docker containers, but which applies to the other systems like Snap and so forth. I cited a study back then finding that "Over 80% of the :latest versions of official images contained at least one high severity vulnerability." The situation is no better now. In December 2023, it was reported that, two years after the critical Log4Shell vulnerability, 25% of apps were still vulnerable to it. Also, only 21% of developers ever update third-party libraries after introducing them into their projects. Clearly, you can't rely on these images with embedded libraries to be secure. And since they are black boxes, they are difficult to audit. Debian's policy of always splitting libraries out from packages is hugely beneficial; it allows fine-grained analysis of not just vulnerabilities, but also the dependency graph. If there's a vulnerability in libpng, you have one place to patch it, and you also know exactly what components of your system use it. If you use snaps or AppImages, you can't know if they contain a deeply embedded vulnerability, nor could you patch it yourself if you even knew. You are at the mercy of upstream detecting and remedying the problem: a dicey situation at best.
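As a rough illustration of that fine-grained visibility (the package and suite names are the current Debian 12 ones and may differ on your system):

$ apt-cache rdepends libpng16-16           # every package that depends on the shared libpng
$ debsecan --suite bookworm --only-fixed   # known vulnerabilities on this host with fixes already available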

Who makes the patches? Fundamentally, humans produce security patches. Often, but not always, patches originate with the authors of a program and are then integrated into distribution packages. It should be noted that every security team has finite resources; there will always be some CVEs that aren't patched in a given system for various reasons: perhaps they are not exploitable, or are too low-impact, or have better mitigations than patches. Debian has an excellent security team; they manage the process of integrating patches into Debian, produce Debian Security Advisories, maintain the Debian Security Tracker (which maintains cross-references with the CVE database), etc. Some distributions don't have this infrastructure. For instance, I was unable to find this kind of tracker for Devuan or Raspberry Pi OS. In contrast, Ubuntu and Arch Linux both seem to have active security teams with trackers and advisories.
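For what it's worth, the Security Tracker also publishes its data in machine-readable form; a sketch of querying it follows (the JSON export exists at this URL, but the exact layout I show is an assumption worth verifying before scripting against it):

$ curl -s https://security-tracker.debian.org/tracker/data/json \
  | jq '."libpng1.6" | keys'   # CVE IDs the tracker knows for the libpng1.6 source package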

Implications for Raspberry Pi OS and others As I mentioned above, I'm transitioning my Pi devices off Raspberry Pi OS (Raspbian). Security is one reason. Although Raspbian is a fork of Debian, and you can install packages like unattended-upgrades on it, they don't work right because they target the Debian infrastructure, and Raspbian hasn't modified them to use its own; in short, Raspbian lacks the infrastructure to support those Debian tools. I don't see any Raspberry Pi OS security advisories, trackers, etc. Not only that, but Raspbian lags behind Debian in both new releases and new security patches, sometimes by days or weeks. Live Migrating from Raspberry Pi OS bullseye to Debian bookworm contains instructions for migrating Raspberry Pis to Debian.
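The infrastructure mismatch is easy to see in unattended-upgrades' own configuration: its stock origin patterns match Debian's security archive, which a Raspbian install does not use, so nothing gets selected for automatic upgrade. A sketch of the relevant stanza (exact contents vary by version):

$ grep -A 3 'Origins-Pattern' /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};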

31 December 2023

Chris Lamb: Favourites of 2023

This post should have marked the beginning of my yearly roundups of the favourite books and movies I read and watched in 2023. However, due to coming down with a nasty bout of flu recently and other sundry commitments, I wasn't able to undertake writing the necessary four or five blog posts. In lieu of this, however, I will simply present my (unordered and unadorned) highlights for now. Do get in touch if this (or any of my previous posts) has spurred you into picking something up yourself.

Books

Peter Watts: Blindsight (2006)
Reyner Banham: Los Angeles: The Architecture of Four Ecologies (2006)
Joanne McNeil: Lurking: How a Person Became a User (2020)
J. L. Carr: A Month in the Country (1980)
Hilary Mantel: A Memoir of My Former Self: A Life in Writing (2023)
Adam Higginbotham: Midnight in Chernobyl (2019)
Tony Judt: Postwar: A History of Europe Since 1945 (2005)
Tony Judt: Reappraisals: Reflections on the Forgotten Twentieth Century (2008)
Peter Apps: Show Me the Bodies: How We Let Grenfell Happen (2021)
Joan Didion: Slouching Towards Bethlehem (1968)
Erik Larson: The Devil in the White City (2003)

Films

Recent releases

Unenjoyable experiences included Alejandro Gómez Monteverde's Sound of Freedom (2023), Alex Garland's Men (2022) and Steven Spielberg's The Fabelmans (2022).
Older releases (Films released before 2022, and not including rewatches from previous years.) Distinctly unenjoyable watches included Ocean's Eleven (1960), El Topo (1970), Léolo (1992), Hotel Mumbai (2018), Bulworth (1998) and The Big Red One (1980).

27 December 2023

Russ Allbery: Review: A Study in Scarlet

Review: A Study in Scarlet, by Arthur Conan Doyle
Series: Sherlock Holmes #1
Publisher: AmazonClassics
Copyright: 1887
Printing: February 2018
ISBN: 1-5039-5525-7
Format: Kindle
Pages: 159
A Study in Scarlet is the short mystery novel (probably a novella, although I didn't count words) that introduced the world to Sherlock Holmes. I'm going to invoke the 100-year-rule and discuss the plot of this book rather freely on the grounds that even someone who (like me prior to a few days ago) has not yet read it is probably not that invested in avoiding all spoilers. If you do want to remain entirely unspoiled, exercise caution before reading on. I had somehow managed to avoid ever reading anything by Arthur Conan Doyle, not even a short story. I therefore couldn't be sure that some of the assertions I was making in my review of A Study in Honor were correct. Since A Study in Scarlet would be quick to read, I decided on a whim to do a bit of research and grab a free copy of the first Holmes novel. Holmes is such a part of English-speaking culture that I thought I had a pretty good idea of what to expect. This was largely true, but cultural osmosis had somehow not prepared me for the surprise Mormons. A Study in Scarlet establishes the basic parameters of a Holmes story: Dr. John Watson as narrator, the apartment he shares with Holmes at 221B Baker Street, the Baker Street Irregulars, Holmes's competition with police detectives, and his penchant for making leaps of logical deduction from subtle clues. The story opens with Watson meeting Holmes, agreeing to split the rent of a flat, and being baffled by the apparent randomness of Holmes's fields of study before Holmes reveals he's a consulting detective. The first case is a murder: a man is found dead in an abandoned house, without a mark on him although there are blood splatters on the walls and the word "RACHE" written in blood. Since my only prior exposure to Holmes was from cultural references and a few TV adaptations, there were a few things that surprised me. One is that Holmes is voluble and animated rather than aloof. Doyle is clearly going for passionate eccentric rather than calculating mastermind. Another is that he is intentionally and unabashedly ignorant on any topic not related to solving mysteries.
My surprise reached a climax, however, when I found incidentally that he was ignorant of the Copernican Theory and of the composition of the Solar System. That any civilized human being in this nineteenth century should not be aware that the earth travelled round the sun appeared to be to me such an extraordinary fact that I could hardly realize it. "You appear to be astonished," he said, smiling at my expression of surprise. "Now that I do know it I shall do my best to forget it." "To forget it!" "You see," he explained, "I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things so that he has a difficulty in laying his hands upon it. Now the skilful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones."
This is directly contrary to my expectation that the best way to make leaps of deduction is to know something about a huge range of topics so that one can draw unexpected connections, particularly given the puzzle-box construction and odd details so beloved in classic mysteries. I'm now curious if Doyle stuck with this conception, and if there were any later mysteries that involved astronomy. Speaking of classic mysteries, A Study in Scarlet isn't quite one, although one can see the shape of the genre to come. Doyle does not "play fair" by the rules that have not yet been invented. Holmes at most points knows considerably more than the reader, including bits of evidence that are not described until Holmes describes them and research that Holmes does off-camera and only reveals when he wants to be dramatic. This is not the sort of story where the reader is encouraged to try to figure out the mystery before the detective. Rather, what Doyle seems to be aiming for, and what Watson attempts (unsuccessfully) as the reader surrogate, is slightly different: once Holmes makes one of his grand assertions, the reader is encouraged to guess what Holmes might have done to arrive at that conclusion. Doyle seems to want the reader to guess technique rather than outcome, while providing only vague clues in general descriptions of Holmes's behavior at a crime scene. The structure of this story is quite odd. The first part is roughly what you would expect: first-person narration from Watson, supposedly taken from his journals but not at all in the style of a journal and explicitly written for an audience. Part one concludes with Holmes capturing and dramatically announcing the name of the killer, who the reader has never heard of before. Part two then opens with... a western?
In the central portion of the great North American Continent there lies an arid and repulsive desert, which for many a long year served as a barrier against the advance of civilization. From the Sierra Nevada to Nebraska, and from the Yellowstone River in the north to the Colorado upon the south, is a region of desolation and silence. Nor is Nature always in one mood throughout the grim district. It comprises snow-capped and lofty mountains, and dark and gloomy valleys. There are swift-flowing rivers which dash through jagged cañons; and there are enormous plains, which in winter are white with snow, and in summer are grey with the saline alkali dust. They all preserve, however, the common characteristics of barrenness, inhospitality, and misery.
First, I have issues with the geography. That region contains some of the most beautiful areas on earth, and while a lot of that region is arid, describing it primarily as a repulsive desert is a bit much. Doyle's boundaries and distances are also confusing: the Yellowstone is a northeast-flowing river with its source in Wyoming, so the area between it and the Colorado does not extend to the Sierra Nevadas (or even to Utah), and it's not entirely clear to me that he realizes Nevada exists. This is probably what it's like for people who live anywhere else in the world when US authors write about their country. But second, there's no Holmes, no Watson, and not even the pretense of a transition from the detective novel that we were just reading. Doyle just launches into a random western with an omniscient narrator. It features a lean, grizzled man and an adorable child that he adopts and raises into a beautiful free spirit, who then falls in love with a wild gold-rush adventurer. This was written about 15 years before the first critically recognized western novel, so I can't blame Doyle for all the cliches here, but to a modern reader all of these characters are straight from central casting. Well, except for the villains, who are the Mormons. By that, I don't mean that the villains are Mormon. I mean Brigham Young is the on-page villain, plotting against the hero to force his adopted daughter into a Mormon harem (to use the word that Doyle uses repeatedly) and ruling Salt Lake City with an iron hand, border guards with passwords (?!), and secret police. This part of the book was wild. I was laughing out-loud at the sheer malevolent absurdity of the thirty-day countdown to marriage, which I doubt was the intended effect. We do eventually learn that this is the backstory of the murder, but we don't return to Watson and Holmes for multiple chapters. Which leads me to the other thing that surprised me: Doyle lays out this backstory, but then never has his characters comment directly on the morality of it, only the spectacle. Holmes cares only for the intellectual challenge (and for who gets credit), and Doyle sets things up so that the reader need not concern themselves with aftermath, punishment, or anything of that sort. I probably shouldn't have been surprised this does fit with the Holmes stereotype but I'm used to modern fiction where there is usually at least some effort to pass judgment on the events of the story. Doyle draws very clear villains, but is utterly silent on whether the murder is justified. Given its status in the history of literature, I'm not sorry to have read this book, but I didn't particularly enjoy it. It is very much of its time: everyone's moral character is linked directly to their physical appearance, and Doyle uses the occasional racial stereotype without a second thought. Prevailing writing styles have changed, so the prose feels long-winded and breathless. The rivalry between Holmes and the police detectives is tedious and annoying. I also find it hard to read novels from before the general absorption of techniques of emotional realism and interiority into all genres. The characters in A Study in Scarlet felt more like cartoon characters than fully-realized human beings. I have no strong opinion about the objective merits of this book in the context of its time other than to note that the sudden inserted western felt very weird. 
My understanding is that this is not considered one of the better Holmes stories, and Holmes gets some deeper characterization later on. Maybe I'll try another of Doyle's works someday, but for now my curiosity has been sated. Followed by The Sign of the Four. Rating: 4 out of 10

3 December 2023

Ben Hutchings: FOSS activity in November 2023

Iustin Pop: Life, getting sick and unfit

Like clockwork, like every autumn, I got sick again. Just a flu; actually, probably two in a row. I don't really understand it: from January to September I'm feeling really awesome, and I manage to do sports five days a week, or more. Then September comes, and things start degrading, and then in October or November I get sick, and it takes me ~3 weeks to recover, during which I'm not even managing reliably 5K steps a day (from walking), not even talking about running or swimming or biking. My Garmin statistics for November compared to October (which wasn't a good month either) are depressing. Let's not even bring up, for example, July. I can see three potential pathways: Basically, all three pathways are similar, just not sure what's the root cause (vs. a trigger). I just did a vitamin D test, and despite regularly taking supplements, it was below the normal range. So clearly somewhere there, lack of vitamin D is a problem. I probably should do a yearly check-up at the end of September? Sigh, I still haven't found the detailed user manual for Homo Sapiens. If anyone has it, specifically for version late 1900s, I'd be thankful. Till next time; hopefully by then things will get better.

26 November 2023

Andrew Cater: MiniDebConf Cambridge - 26th November 2023 - Afternoon sessions

That's all folks ...
Sadly, nothing too much to report. I delivered a very quick three-slide lightning talk on Accessibility, WCAG [Web Content Accessibility Guidelines] version 2.2, and a request for Debian to do better.

WCAG 2.2: WCAG 2.2 Abstract. Debian-accessibility mailing list link: debian-accessibility. I watched the other lightning talks but then left at 1500 - missing three good talks - to drive home at least partly in daylight.

A great four days - the chance to put some names to faces and to recharge in Debian spaces.

Thanks to all involved and especially ...
Thanks to Cambridge Debian folk for helping arrange evening meals, lifts and so on, and especially to those who also happen to be ARM employees who were badging us in and out through the four days.

Thanks to those who staffed Front Desk on both days and, especially, also to the ARM security guards who let us into the site at 0745 on all four days, and to Mark who did the weekend shift inside the building for Saturday and Sunday.

Thanks to ARM for excellent facilities, food, coffee and hosting us, to Codethink for sponsoring - and a lecture from Sudip and some interesting hardware - and to Pexip for sponsorship (and employee attendance).

Here's to the next opportunity, whenever that may be.

25 November 2023

Andrew Cater: Mini-DebCamp ARM Cambridge day 2

Another really good day at ARM. Still lots of coffee and good food - supplemented by a cooked breakfast if you were early enough :)

Lots of small groups of people working earnestly in the main lecture theatre and a couple of meeting rooms and the soft seating area: various folk arriving ready for tomorrow. Video team setting up in the afternoon and running up servers and cabling - all ready for a full schedule tomorrow and Sunday. Many thanks to our sponsors - and especially the helpful staff at ARM who were helping us in and out, sorting out meeting rooms and generally coping with a Debian invasion. More people tomorrow for the weekend.

21 November 2023

Mike Hommey: How I (kind of) killed Mercurial at Mozilla

Did you hear the news? Firefox development is moving from Mercurial to Git. While the decision is far from being mine, and I was barely involved in the small incremental changes that ultimately led to this decision, I feel I have to take at least some responsibility. And if you are one of those who would rather use Mercurial than Git, you may direct all your ire at me. But let's take a step back and review the past 25 years leading to this decision. You'll forgive me for skipping some details and any possible inaccuracies. This is already a long post, while I could have been more thorough, even I think that would have been too much. This is also not an official Mozilla position, only my personal perception and recollection as someone who was involved at times, but mostly an observer from a distance. From CVS to DVCS From its release in 1998, the Mozilla source code was kept in a CVS repository. If you're too young to know what CVS is, let's just say it's an old school version control system, with its set of problems. Back then, it was mostly ubiquitous in the Open Source world, as far as I remember. In the early 2000s, the Subversion version control system gained some traction, solving some of the problems that came with CVS. Incidentally, Subversion was created by Jim Blandy, who now works at Mozilla on completely unrelated matters. In the same period, the Linux kernel development moved from CVS to Bitkeeper, which was more suitable to the distributed nature of the Linux community. BitKeeper had its own problem, though: it was the opposite of Open Source, but for most pragmatic people, it wasn't a real concern because free access was provided. Until it became a problem: someone at OSDL developed an alternative client to BitKeeper, and licenses of BitKeeper were rescinded for OSDL members, including Linus Torvalds (they were even prohibited from purchasing one). Following this fiasco, in April 2005, two weeks from each other, both Git and Mercurial were born. The former was created by Linus Torvalds himself, while the latter was developed by Olivia Mackall, who was a Linux kernel developer back then. And because they both came out of the same community for the same needs, and the same shared experience with BitKeeper, they both were similar distributed version control systems. Interestingly enough, several other DVCSes existed: In this landscape, the major difference Git was making at the time was that it was blazing fast. Almost incredibly so, at least on Linux systems. That was less true on other platforms (especially Windows). It was a game-changer for handling large codebases in a smooth manner. Anyways, two years later, in 2007, Mozilla decided to move its source code not to Bzr, not to Git, not to Subversion (which, yes, was a contender), but to Mercurial. The decision "process" was laid down in two rather colorful blog posts. My memory is a bit fuzzy, but I don't recall that it was a particularly controversial choice. All of those DVCSes were still young, and there was no definite "winner" yet (GitHub hadn't even been founded). It made the most sense for Mozilla back then, mainly because the Git experience on Windows still wasn't there, and that mattered a lot for Mozilla, with its diverse platform support. As a contributor, I didn't think much of it, although to be fair, at the time, I was mostly consuming the source tarballs. 
Personal preferences Digging through my archives, I've unearthed a forgotten chapter: I did end up setting up both a Mercurial and a Git mirror of the Firefox source repository on alioth.debian.org. Alioth.debian.org was a FusionForge-based collaboration system for Debian developers, similar to SourceForge. It was the ancestor of salsa.debian.org. I used those mirrors for the Debian packaging of Firefox (cough cough Iceweasel). The Git mirror was created with hg-fast-export, and the Mercurial mirror was only a necessary step in the process. By that time, I had converted my Subversion repositories to Git, and switched off SVK. Incidentally, I started contributing to Git around that time as well. I apparently did this not too long after Mozilla switched to Mercurial. As a Linux user, I think I just wanted the speed that Mercurial was not providing. Not that Mercurial was that slow, but the difference between a couple seconds and a couple hundred milliseconds was a significant enough difference in user experience for me to prefer Git (and Firefox was not the only thing I was using version control for) Other people had also similarly created their own mirror, or with other tools. But none of them were "compatible": their commit hashes were different. Hg-git, used by the latter, was putting extra information in commit messages that would make the conversion differ, and hg-fast-export would just not be consistent with itself! My mirror is long gone, and those have not been updated in more than a decade. I did end up using Mercurial, when I got commit access to the Firefox source repository in April 2010. I still kept using Git for my Debian activities, but I now was also using Mercurial to push to the Mozilla servers. I joined Mozilla as a contractor a few months after that, and kept using Mercurial for a while, but as a, by then, long time Git user, it never really clicked for me. It turns out, the sentiment was shared by several at Mozilla. Git incursion In the early 2010s, GitHub was becoming ubiquitous, and the Git mindshare was getting large. Multiple projects at Mozilla were already entirely hosted on GitHub. As for the Firefox source code base, Mozilla back then was kind of a Wild West, and engineers being engineers, multiple people had been using Git, with their own inconvenient workflows involving a local Mercurial clone. The most popular set of scripts was moz-git-tools, to incorporate changes in a local Git repository into the local Mercurial copy, to then send to Mozilla servers. In terms of the number of people doing that, though, I don't think it was a lot of people, probably a few handfuls. On my end, I was still keeping up with Mercurial. I think at that time several engineers had their own unofficial Git mirrors on GitHub, and later on Ehsan Akhgari provided another mirror, with a twist: it also contained the full CVS history, which the canonical Mercurial repository didn't have. This was particularly interesting for engineers who needed to do some code archeology and couldn't get past the 2007 cutoff of the Mercurial repository. I think that mirror ultimately became the official-looking, but really unofficial, mozilla-central repository on GitHub. On a side note, a Mercurial repository containing the CVS history was also later set up, but that didn't lead to something officially supported on the Mercurial side. Some time around 2011~2012, I started to more seriously consider using Git for work myself, but wasn't satisfied with the workflows others had set up for themselves. 
I really didn't like the idea of wasting extra disk space keeping a Mercurial clone around while using a Git mirror. I wrote a Python script that would use Mercurial as a library to access a remote repository and produce a git-fast-import stream. That would allow the creation of a git repository without a local Mercurial clone. It worked quite well, but it was not able to incrementally update. Other, more complete tools existed already, some of which I mentioned above. But as time was passing and the size and depth of the Mercurial repository was growing, these tools were showing their limits and were too slow for my taste, especially for the initial clone. Boot to Git In the same time frame, Mozilla ventured in the Mobile OS sphere with Boot to Gecko, later known as Firefox OS. What does that have to do with version control? The needs of third party collaborators in the mobile space led to the creation of what is now the gecko-dev repository on GitHub. As I remember it, it was challenging to create, but once it was there, Git users could just clone it and have a working, up-to-date local copy of the Firefox source code and its history... which they could already have, but this was the first officially supported way of doing so. Coincidentally, Ehsan's unofficial mirror was having trouble (to the point of GitHub closing the repository) and was ultimately shut down in December 2013. You'll often find comments on the interwebs about how GitHub has become unreliable since the Microsoft acquisition. I can't really comment on that, but if you think GitHub is unreliable now, rest assured that it was worse in its beginning. And its sustainability as a platform also wasn't a given, being a rather new player. So on top of having this official mirror on GitHub, Mozilla also ventured in setting up its own Git server for greater control and reliability. But the canonical repository was still the Mercurial one, and while Git users now had a supported mirror to pull from, they still had to somehow interact with Mercurial repositories, most notably for the Try server. Git slowly creeping in Firefox build tooling Still in the same time frame, tooling around building Firefox was improving drastically. For obvious reasons, when version control integration was needed in the tooling, Mercurial support was always a no-brainer. The first explicit acknowledgement of a Git repository for the Firefox source code, other than the addition of the .gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to current bootstrap.py, from September 2012. Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format command that would apply clang-format-diff to the output from hg diff. Obviously, running hg diff on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014. A year later, when the initial implementation of mach artifact was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact was eventually added 14 months later, in March 2016. 
From gecko-dev to git-cinnabar Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under the hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands. Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now). Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence. Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need for syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git, nothing of the sort has happened since the switch).
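For readers who have never seen it, the day-to-day shape of that workflow looks roughly like this. This is a sketch assuming git-cinnabar is installed and provides the hg:: remote helper; the try URL and push details are simplified, so check the git-cinnabar documentation for the real invocations:

$ git clone hg::https://hg.mozilla.org/mozilla-unified firefox   # clone a Mercurial repository as a Git one
$ cd firefox
$ git checkout -b my-fix                         # then edit and commit with plain Git
$ git push hg::ssh://hg.mozilla.org/try my-fix   # push directly to a Mercurial server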
It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such. In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary. 7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as base repository. Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful, and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta, or some other branch, depending which one was last updated. Hosting is simple, right? Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones). And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches. Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean-up autoland if you wanted to avoid the duplication. Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered as Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least, it shows a warning, now. That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar). Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. 
Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is. "But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We actually had laid the ground to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way. And these endpoints are regularly abused, and cause extra load to your servers, yes plural, because of course a single server won't handle the load for the number of users of your big repositories. And because your endpoints are abused, you have to close some of them. And I'm not mentioning the Try repository with its tens of thousands of heads, which brings its own sets of problems (and it would have even more heads if we didn't fake-merge them once in a while). Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016. The growing difficulty of maintaining the status quo Slowly, but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin. Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will hide the problem under the rug, but the underlying problem will still exist (although the last version of Mercurial seems to have improved things). On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by ways of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option. But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. 
And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one. Clone times with git-cinnabar have also started to go a little out of hand in the past few years, but this was mitigated in a similar manner as with the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago, I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours). I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar specific issues a Me problem, rather than a Mozilla problem. Taking the leap I can't talk for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective. Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more. It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21 years-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different. You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial going their own way, it's hard to ignore that the landscape has evolved. And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?). Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now majoritarily using Git. 
I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should even actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git). Heck, even Microsoft moved to Git! With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems. But... GitHub? I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part, especially with limited resources, and with the mixed experience hosting both Mercurial and git has been so far. After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those). Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that). I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, with literal tens of millions repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened). "But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion. At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. 
The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then. So, what's next? The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not. While there is not one unique workflow, here's what I would recommend anyone who wants to take the leap off Mercurial right now: As there is no one-size-fits-all workflow, I won't tell you how to organize yourself from there. I'll just say this: if you know the Mercurial sha1s of your previous local work, you can create branches for them with:
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
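If you don't have those Mercurial sha1s at hand, one way to list your unpublished work from the old clone before removing it (a sketch; it assumes your local commits are still in Mercurial's draft phase):

$ hg log -r 'head() and draft()' -T '{node} {desc|firstline}\n'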
At this point, you should have everything available on the Git side, and you can remove the .hg directory. Or move it into some empty directory somewhere else, just in case. But don't leave it here, it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to ./mach configure before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post. If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them. Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow to switch from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far). What about git-cinnabar? With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. Legitimately, this begs the question whether it will still be maintained. I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches). Git-cinnabar started as a Python script, it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust, and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So the day to day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better. Arguably, though, git-cinnabar has been relatively stagnant feature-wise, because all the features I need are there. So, no, I don't expect git-cinnabar to die along Mercurial use at Mozilla, but I can't really promise anything either. Final words That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;) So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change. To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire. And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we would have picked some yet to come new horse. 
But hey, that's the tech cycle for you.

18 November 2023

Bits from Debian: Debian Events: MiniDebConfCambridge-2023

Next week the #MiniDebConfCambridge takes place in Cambridge, UK. This event will run from Thursday 23 to Sunday 26 November 2023. The 4 days of the MiniDebConf include a Mini-DebCamp and, of course, the main conference talks, BoFs, meets, and sprints. We give thanks to our partners and sponsors for this event:
  • Arm - Building the Future of Computing
  • Codethink - Open Source System Software Experts
  • pexip - Powering video everywhere
Please see the MiniDebConfCambridge page for more information regarding travel documentation, accommodation, meal planning, the full conference schedule, and yes, even parking. We hope to see you there!

15 November 2023

Scarlett Gately Moore: Farewell for now, again.

I write this with a heavy heart. Exactly one year ago I lost my beloved job. That was all me; I had a terrible run of bad luck with COVID and I never caught up. In the last year, I have taken on several new projects to re-create a new image for myself and to make up for the previous year, and I believe I worked very hard in doing so. Unfortunately, my seemingly good interviews have not ended in a job. One potential job I did not put as much effort into as I should have, because I put all my cards into a project that didn't quite turn out as expected. I do hope it still goes through for the KDE community as a whole, because, well, it is really cool, but it isn't the job I thought. I have been relying purely on donations for survival and it simply isn't enough. I am faced once again with no internet to even do my open source work (Snaps, KDE neon, Debian and everything that links to those). I simply can't put the burden of my stubbornness on my family any longer. Bills are long overdue; we have learned to live without many things, but the stress of essential bills and living expenses going unpaid is simply too much. I do thank each and every one of you that has contributed to my fundraisers. It means the world to me that people do care. It just isn't enough. So, with the sunset of Witch Wells, I am sunsetting my software career for now and will be looking for something, anything, local, just to pay some bills, calm our nerves and hopefully find some happiness again. I am tired, broke, stressed out and burned out. I will be back when I can breathe again with my finances. If you can spare some change to help with gas, propane, or internet, I would be ever so grateful. So long for now. ~ Scarlett https://gofund.me/1346869d

1 November 2023

Dirk Eddelbuettel: RcppArmadillo 0.12.6.6.0 on CRAN: Bugfix, Thread Throttling

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++ or for quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1110 other packages on CRAN, downloaded 31.2 million times (per the partial logs from the cloud mirrors of CRAN); the CSDA paper (preprint / vignette) by Conrad and myself has been cited 563 times according to Google Scholar. This release brings upstream bugfix releases 12.6.5 (sparse matrix corner case) and 12.6.6 with an ARPACK correction. Conrad released it this morning; I had been running reverse dependency checks anyway and knew we were in good shape, so for once I did not await a full run against the now over 1100 (!!) packages using RcppArmadillo. This release also contains a change I prepared on Sunday which helps with the much-criticized (and rightly so, I may add) insistence by CRAN concerning 'throttling'. The motivation is understandable: CRAN tests many packages at once on beefy servers and can ill afford tests going off and requesting numerous cores. But rather than providing a global setting at their end, CRAN insists that each package (!!) deal with this. The recent traffic on the helpful-as-ever r-pkg-devel mailing list clearly shows that this confuses quite a few package developers. Some have admitted to simply turning examples and tests off: a net loss for all of us. Now, Armadillo defaults to using up to eight cores (which is enough to upset CRAN) when running with OpenMP (which is generally only on Linux, for reasons I'd rather not get into). With this release I expose a helper function (from OpenMP) to limit this. I also set up an example package and repo RcppArmadilloOpenMPEx detailing this, and added a demonstration of how to use the new throttlers to the fastLm example. I hope this proves useful to users of the package; a quick environment-variable alternative is also sketched after the changelog below. The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.6.6.0 (2023-10-31)
  • Upgraded to Armadillo release 12.6.6 (Cortisol Retox)
    • Fix eigs_sym(), eigs_gen() and svds() to generate deterministic results in ARPACK mode
  • Add helper functions to set and get the number of OpenMP threads
  • Store initial thread count at package load and use in thread-throttling helper (and resetter) suitable for CRAN constraints

Changes in RcppArmadillo version 0.12.6.5.0 (2023-10-14)
  • Upgraded to Armadillo release 12.6.5 (Cortisol Retox)
    • Fix for corner-case bug in handling sparse matrices with no non-zero elements
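As an aside to the throttling change above: while in-package helpers are the proper fix, the core limit can also be approximated from the outside via the standard OpenMP environment variable. A minimal sketch (the tarball name below is made up for illustration):
# OMP_NUM_THREADS is the standard OpenMP control; capping it at two threads
# for the duration of a check keeps the core count modest without any
# package-level changes.
OMP_NUM_THREADS=2 R CMD check RcppArmadilloOpenMPEx_0.1.0.tar.gz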

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments, etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 October 2023

Russell Coker: Hello Kitty

I've just discovered a new xterm replacement named Kitty [1]. It boasts about being faster due to threading and using the GPU, and it does appear faster on some of my systems, but that's not why I like it. A trend in terminal programs in recent years has been tabbed operation, so you can have multiple sessions in one OS window; this is something I've never liked, just as I've never liked using Screen to switch between sessions when I had the option of just having multiple sessions on screen. The feature that I like most about Kitty is the ability to have a grid-based layout of sessions in one OS window. Instead of having 16 OS windows on my workstation, or 4 OS windows on a laptop with different entries in the window list and the possibility of them getting messed up if the OS momentarily gets confused about the screen size (a common issue with laptop use), I can just have 1 Kitty window that holds all the sessions. Kitty has "kitten" processes that can do various things; one is icat, which displays an image file in the terminal and leaves it in the scroll-back buffer. I put the following shell code in one of the scripts called from .bashrc to set up an alias for icat.
if [ "$TERM" == "xterm-kitty" ]; then
  alias icat='kitty +kitten icat'
fi
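With the alias in place (it only takes effect inside a Kitty session), viewing an image inline is then just:
icat ~/Pictures/photo.jpg   # any image file; this path is just an example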
The kitten interface can be supported by other programs. The version of the mpv video player in Debian/Unstable has a --vo=kitty option, which is an interesting feature. However, playing a video in a Kitty window that takes up 1/4 of the screen on my laptop takes a bit over 100% of a CPU core for mpv and about 10% to 20% for Kitty, which gives a total of about 120% CPU use on my i5-6300U, compared to about 20% for mpv using Wayland directly. The option to make it talk to Kitty via shared memory doesn't improve things.

Using this effectively requires installing the kitty-terminfo package on every system you might ssh to. But you can set the term type to xterm-256color when logged in to a system without the kitty terminfo installed (a one-line sketch of that workaround follows at the end of this post). The fact that icat and presumably other advanced terminal functions work over ssh by default is a security concern, but this also works with Konsole and will presumably be added to other terminal emulators, so it's a widespread problem that needs attention.

There is support for desktop notifications in the Kitty terminal encoding [2]. One of the things I'm interested in at the moment is how to best manage notifications on converged systems (phone and desktop), so this is something I'll have to investigate.

Overall Kitty has some great features and definitely has the potential to improve productivity for some work patterns. There are some security concerns that it raises through closer integration between systems and between programs, but many of them aren't exclusive to Kitty.
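As promised above, a minimal sketch of the terminfo workaround for hosts without the kitty-terminfo package (the host name is just a placeholder):
# Pretend to be a plain 256-color xterm for this one connection so remote
# programs don't trip over the unknown xterm-kitty terminfo entry.
TERM=xterm-256color ssh remote.example.com
Installing kitty-terminfo on the remote side once is the tidier long-term fix.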

Valhalla's Things: Forgotten Yeast Bread or Pan Sbagliato

Posted on October 29, 2023
a wide and flat round loaf of bread with a well-cooked crust
I've made it again. And again. And a few more times, and now it has an official household name: "Pan Sbagliato", or "Wrong Bread". This is the procedure I've mostly settled on, starting on the day before (here called Saturday) and baking so that it's ready for lunch time (on what here is called Sunday).
Saturday: around 13:00
In a bowl, mix together and work well:
  • 250 g water;
  • 400 g flour;
  • 8 g salt;
cover to rise.
Saturday: around 18:00
In a small bowl, mix together:
  • 2-3 g yeast;
  • 10 g water;
  • 10 g flour.
Saturday: around 21:00
In the bowl with the original dough, add the contents of the small bowl plus:
  • 100 g flour;
  • 100 g water;
and work well; cover to rise overnight.
Sunday: around 8:00
Pour the dough on a lined oven tray, leave in the cold oven to rise.
Sunday: around 11:00
Remove the tray from the oven, preheat the oven to 240°C, bake for 10 minutes, then lower the temperature to 160°C and bake for 20 more minutes. Waiting until it has cooled down a bit will make it easier to cut, but is not strictly necessary.
the loaf cut in half, to show thin stripes of crumb from the high hydration.
I've had up to a couple of hours' variation in the times listed, with no ill effects.
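For the curious, the overall hydration can be worked out from the quantities above (my own back-of-the-envelope arithmetic, not part of the original recipe):

\[
\text{hydration} = \frac{(250 + 10 + 100)\,\mathrm{g\ water}}{(400 + 10 + 100)\,\mathrm{g\ flour}} = \frac{360}{510} \approx 71\%
\]

which is indeed on the high side for a hand-worked dough, and accounts for the open, stripey crumb.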

21 October 2023

Russell Coker: More About the PineTime

Since my initial review of the PineTime 10 days ago [1] I've used it in more situations. My initial tests were done connecting to a Huawei Nova 7i [2]; I am now using it with a Huawei Mate 10 Pro. I've also upgraded the PineTime from version 1.11 (from memory) of the InfiniTime software that runs on the watch to version 1.13 [3]. To upgrade it I had to download the file pinetime-mcuboot-app-dfu-1.13.0.zip to the Android phone and then use the File Installer option of the GadgetBridge Android app to upload it. The zip file does NOT need to be extracted first; I don't know if GadgetBridge extracts it before upload or if the PineTime firmware has a copy of unzip, but it just works.

Version 1.13 is purported to use less battery. I haven't directly verified this, as I turned on the new feature of measuring my pulse 24*7, which significantly increases battery use. The end result is that the battery is being used up at about the same rate as before; overall, adding a new battery-hungry feature while reducing battery use for other things to compensate is a good thing, and strongly suggests that battery use has decreased overall.

I have noticed that now, with a different phone and a different version of the firmware, it doesn't reconnect as reliably. Sometimes I need to turn bluetooth on the watch off and on before it works (which indicates an issue with the firmware) and sometimes I need to turn bluetooth off and on on the phone, which indicates a phone issue. Also, I often unlock my phone to find the GadgetBridge notification saying that it's disconnected, and it usually connects fine, but I get the impression it's often disconnected. Does the Mate 10 Pro have a problem that triggers a bug in the PineTime? Does the 1.13 version of InfiniTime have a problem that triggers a bug in the Mate 10 Pro? Are they both independently buggy? Is the new version of InfiniTime just disconnecting when it's not doing stuff, to save battery, and triggering bugs that weren't obvious before?

I've tested the media control, which basically works; sometimes it gets out of sync and displays the name of the previous track, which is annoying.

The PineTime is IP67 rated and there are reports on Reddit of people wearing it in the shower and swimming pool. I wouldn't recommend those things, although it should work OK. It might be an option for controlling music when in the bath or when having a pool party.

When the watch is running normally and displays a new notification, it's not possible to swipe it away; you have to go to the notifications menu afterwards to swipe them, which I find annoying. Also, the notification of an inbound call remains in the notification list indefinitely, while I think a more appropriate action is to have it disappear within an amount of time by which it has already been answered or gone to voicemail. Voicemail timeouts are as low as 15 seconds, so having the notification disappear after 1 minute would be reasonable.

I have configured my PineTime to take 2 taps on the screen to wake up. I previously had it set to 1 tap and had problems with accidentally doing something it registered as a tap while in bed, waking me up. Also, I found that if I want to turn the screen on when my hands are dirty, so I don't want to touch it with a finger, then tapping it on my nose works well. Apparently it is programmed to ignore taps on large areas, so I can't wake it with my elbow.

I've set up a PineTime for an elderly relative who is greatly enjoying it.
I don't expect them to flash new firmware or do any other complex things, but they are doing well with using the device. They are considering getting a different band as they don't like rubber; I'm sure their local jeweler has some leather and metal bands that could fit. There is a design on Thingiverse for a PineTime case [4]; this could be used for making an adaptor to fit a PineTime to a greatly different type of band, an instrument console, etc. Generally I think the PineTime is an OK smart watch for someone who's not into FOSS for its own sake. My relative could have been happy with a slightly cheaper watch, but it's still significantly cheaper than the Samsung and Apple options, so it's not particularly expensive. A benefit for them is that, having the same type of smart watch as me, they will get better tech support.

14 October 2023

Ravi Dwivedi: Kochi - Wayanad Trip in August-September 2023

A trip full of hitchhiking, beautiful places and welcoming locals.

Day 1: Arrival in Kochi
Kochi is a city in the state of Kerala, India. This year's DebConf was to be held in Kochi from the 3rd to the 17th of September, which I was planning to attend. My friend Suresh, who was planning to join, told me that 29th August 2023 would be Onam, a major festival of the state of Kerala. So, we planned a Kerala trip before the DebConf. We booked early morning flights to Kochi from Delhi and reached Kochi on 28th August. We had booked a hostel named Zostel in Ernakulam. During check-in, they asked me to fill in a form which required signing in using a Google account. I told them I don't have a Google account and I don't want to create one either. The people at the front desk seemed receptive, so I went ahead with telling them the problems of such a sign-in being mandatory for check-in. Anyway, they only took a photo of my passport and let me check in without a Google account. We stayed in a ten-room dormitory, which allowed travellers of any gender. The dormitory room was air-conditioned, spacious and clean, and the beds were comfortable too. There were two bathrooms in the dormitory and they were clean. Plus, there was a separate dormitory room in the hostel exclusively for females. I noticed that the Zostel was not on OpenStreetMap, and so I added it :) . The hostel had a small canteen for tea and snacks, and a common sitting area outside the dormitories, which had beds too. There was a separate silent room, suitable for people who want to work.
Dormitory room in Zostel Ernakulam, Kochi.
Beds in Zostel Ernakulam, Kochi.
We had lunch at a nearby restaurant and it was hard to find anything vegetarian for me. I bought some freshly made banana chips from the street and they were tasty. As far as I remember, I had a big glass of pineapple juice for lunch. Then I went to the Broadway market and bought some cardamom and cinnamon for home. I also went to a nearby supermarket and bought Matta brown rice for home. Then, I looked for a courier shop to send the things home, but all of them were closed due to the Onam festival. After returning to the Zostel, I overslept till 9 PM, and in the meanwhile Suresh planned with Saiduth and Shwetank (who met us during our stay in Zostel) to go to a place in Fort Kochi for dinner. I suspected I would be disappointed by the lack of vegetarian options, as they were planning to have fish. I already had a restaurant in mind - Brindhavan restaurant (suggested by Anupa), which was a pure vegetarian restaurant. To reach there, I got off at Palarivattom metro station and started looking for an auto-rickshaw to get to the restaurant. I didn't get one for more than 5 minutes. Since that restaurant was not added to OpenStreetMap, I didn't even know how far it was or which direction to go. Then, I saw a Zomato delivery person on a motorcycle and asked him where the restaurant was. It was already 10 PM and the restaurant closes at 10:30, so I asked him whether he could drop me off. He agreed and dropped me off at the restaurant; it was 4-5 km from that metro station. I tipped him and expressed my gratitude for the help. He refused to take the tip, but I insisted and he accepted. I entered the restaurant as it was coming to a close, so many items were not available. I ordered some Kadhai Paneer (the only item left) with naan. It tasted fine. Since the next day was Thiruvonam, I asked the restaurant about the Sadya thali menu and prices for the next day. I planned to eat Sadya thali at that restaurant, but my plans changed later.
Onam sadya menu from Brindhavan restaurant.

Day 2: Onam celebrations
The next day, on the 29th of August 2023, we planned to leave for Wayanad. Wayanad is a hill station in Kerala and a famous tourist spot. Praveen suggested visiting Munnar, as it is far closer to Kochi than Wayanad (80 km vs 250 km), but I had already visited Munnar on my previous trips, so we chose Wayanad. We had a late night train from Ernakulam Junction (at 23:30 hours) to Kozhikode, the nearest railway station to Wayanad. So, we checked out in the morning, as we had plans to roam around Kochi before taking the train. Zostel was celebrating Onam that day. To opt in, we had to pay 400 rupees, which included a Sadya thali and a mundu. Suresh and I paid the amount and opted in for the celebrations. The Sadya thali had Rice, Sambhar, Rasam, Avial, Banana Chips, Pineapple Pachadi, Pappadam, many types of pickles and chutneys, Pal Ada Payasam and Coconut Jaggery Payasam. And there was water too :). Those payasams were really great and I had one more round of them. Later, I had a great variety of payasams during the DebConf.
Sadya lined up for serving
Sadya thali served on banana leaf.
So, we hung out in the common room and put our luggage there. We played UNO and had conversations with other travellers in the hostel. I had a fun time there and I still think it is one of the best hostel experiences I have had. We made good friends with Saiduth (Telangana) and Shwetank (Uttarakhand). They were already aware of software like Debian, and we had some detailed conversations about the Free Software movement. I remember explaining the difference between the terms "Open Source" and "Free Software". I also told them about the StreetComplete app, a beginner-friendly app to edit OpenStreetMap. We had dinner at a place nearby (named Palaraam), but again, the vegetarian options were very limited! After dinner, we came back to the Zostel, and Suresh and I left for Ernakulam Junction to catch our train, the Maveli Express (16604).

Day 3: Going to Wayanad
The Maveli Express was scheduled to reach Kozhikode at 03:25 in the morning. I had set alarms from 03:00 to 03:30, at gaps of 10 minutes. Every time I woke up, I turned off the alarm. Then I woke up, saw the train reaching Kozhikode station, and woke up Suresh to deboard. But then I noticed that the train was actually leaving the station, not arriving! We had missed our stop. So we looked at the next stops and whether we could deboard there. I was very sleepy and wanted to take a retiring room at some station before continuing our journey to Wayanad. The next stop was Quilandi, and we found online that it didn't have a retiring room, so we skipped it. We got off at the next stop, named Vadakara, and found out no retiring room was available there either. So, we asked for information regarding buses to Wayanad, and were told that there is a bus to Wayanad around 07:00 hours from the bus station, which was a few kilometres from the railway station. We took a bus for Kalpetta (in Wayanad) at around 07:00. The destinations of the buses were written in Malayalam, which we could not read. Once again, the locals helped us get on the bus to Kalpetta. Vadakara is not a big city and it can be hard to find people who know good Hindi or English, unlike in Kochi. Despite the language issues, I had no problem with navigation there, thanks to the locals. I mostly spent the bus journey sleeping. A few hours later, the bus dropped us at Kalpetta. We had a booking at a hostel in Rippon village, 16 km from Kalpetta. On the way, we were treated with beautiful views of nature, which is present everywhere in Wayanad. The place was covered with tea gardens and our eyes were treated with beautiful scenery at every corner.
We were treated with such views during the Wayanad trip.
Rippon village was a very quiet place and I liked the calm atmosphere. The place is blessed by nature and has stunning scenery. I found English was more common than Hindi in Wayanad. The locals were very nice and helped me even when they didn't know my language.
A road in Rippon.
After catching some sleep at the hostel, I went out in the afternoon. I hitchhiked from the hostel to the main road. I bought more spices from a nearby shop and realized that I should have waited for my visit to Wayanad to buy cardamom, which I had already bought in Kochi. Then, I was looking for a post office to send the spices home. The people at the spice shop told me that the nearby Rippon post office was closed by that time, but the post office at Meppadi, 5 km away, was open. I went to Meppadi and found that the post office closes at 15:00; I reached five minutes late. My packing was not very good and they asked me to pack it tighter. There was a shop near the post office, and the people there gave me cardboard and tape and helped pack my stuff for the post. By the time I went to the post office again, it was 15:30, but they accepted my parcel anyway.

Day 4: Kanthanpara Falls, Zostel Wayanad and Karapuzha Dam
The Kanthanpara waterfalls were 2 km from the hostel. I hitchhiked there from the hostel on a scooty. The entry ticket cost Rs 40. There were good views inside, though not much to see other than the waterfalls themselves.
Entry to Kanthanpara Falls.
Kanthanpara Falls.
We had a booking at Zostel Wayanad for this day, so we shifted there. Again, as with their Ernakulam branch, they asked me to fill in a form which required signing in with Google, but when I said I don't have a Google account they checked me in without it. There were tea gardens inside the Zostel boundaries and the property was beautiful.
A view of Zostel Wayanad.
A map of Wayanad showing tourist places.
A view from inside the Zostel Wayanad property.
Later in the evening, I went to Karapuzha Dam. I witnessed a beautiful sunset during the journey. Karapuzha Dam had many activities, like ziplining, and was nice to roam around. Chembra Peak is near the Zostel Wayanad, so I was planning to trek to the heart-shaped lake there. It was suggested by Praveen and, looking online, the trek seemed worth doing. There was an issue, however: the charges for the trek were Rs 1770 for up to five people. So, if I went alone, I would have to spend Rs 1770 on the trek; if I went with another person, we would split the Rs 1770 between the two of us, and so on. The optimal way to do it is to go in a group of five (you included :D). I asked the front desk at Zostel if they could connect me with people going to Chembra Peak the next day, and they told me about a group of four people planning to go. I got lucky! All four of them were from Kerala and worked in Qatar.

Day 5: Chembra peak trek
The date was 1st September 2023. I woke up early (05:30 in the morning) for the Chembra Peak trek. I had bought hiking shoes especially for trekking, which turned out to be a very good idea. The ticket counter opens at 07:00. The group of four I planned to trek with met me around 06:00 at the Zostel. We went to the ticket counter around 06:30 and had breakfast at shops selling Maggi noodles and bread omelette near the counter. It was a hot day and the trek was difficult for an inexperienced person like me. The scenery was green and beautiful throughout.
Terrain during trekking towards the Chembra peak.
Heart-shaped lake at the Chembra peak.
Me at the heart-shaped lake.
Views from the top of the Chembra peak.
View of another peak from the heart-shaped lake.
While returning from the trek, I found a shop selling bamboo rice, which I bought; I will make bamboo rice payasam out of it at home (I have some coconut milk from Kerala too ;)). We returned to the Zostel in the afternoon. I had muscle pain after the trek and it still has not completely disappeared. At night, we took a bus from Kalpetta to Kozhikode in order to return to Kochi.

Day 6: Return to Kochi
At midnight on the 2nd of September, we reached Kozhikode bus stand. We roamed around for something to eat, and I didn't find anything vegetarian. No surprises there! Then we went to Kozhikode railway station and looked for retiring rooms, but no luck there. We waited at the station, took the next train to Kochi at 03:30, and reached Ernakulam Junction at 07:30 (half an hour before the train's scheduled time!). From there, we went to Zostel Fort Kochi, stayed one night, and checked out the next morning.

Day 7: Roaming around in Fort Kochi
On the 3rd of September, we roamed around Fort Kochi. We visited the usual places - St Francis Church, the Dutch Palace, Jew Town, and the Paradesi Synagogue. I also visited some homestays, and the owners were very happy to show me their places even when I made it clear that I was not looking for a stay. In the evening, we went to Kakkanad to attend DebConf. The story continues in my DebConf23 blog post.

12 October 2023

Jonathan McDowell: Installing Debian on the BananaPi M2 Zero

My previously mentioned C.H.I.P. repurposing has been partly successful; I've found a use for it (which I still need to write up), but unfortunately it's too useful and the fact it's still a bit flaky has become a problem. I spent a while trying to isolate exactly what the problem is (I'm still seeing occasional hard hangs with no obvious debug output in the logs or on the serial console), then realised I should just buy one of the cheap ARM SBC boards currently available. The C.H.I.P. is based on an Allwinner R8, which is a single ARM v7 core (an A8). So it's fairly low power by today's standards, and it seemed pretty much any board would do. I considered a Pi Zero 2, but couldn't be bothered trying to find one in stock at a reasonable price (I've had one on backorder from CPC since May 2022, and yes, I know other places have had them in stock since, but I don't need one enough to chase, and I'm now mostly curious about whether it will ever ship). As the title of this post gives away, I settled on a Banana Pi BPI-M2 Zero, which is based on an Allwinner H3. That's a quad-core ARM v7 (an A7), so a bit more oomph than the C.H.I.P. All in all it set me back £25, including a set of heatsinks that form a case around it. I started with the vendor-provided Debian SD card image, which is based on Debian 9 (stretch) and so somewhat old. I was able to dist-upgrade my way through buster and bullseye, and end up on bookworm. I then discovered the bookworm 6.1 kernel worked just fine out of the box, and even included a suitable DTB. Which got me thinking about whether I could do a completely fresh Debian install with minimal tweaking. First thing, a boot loader. The Allwinner chips are nice in that they'll boot off SD, so I just needed a suitable u-boot image. Rather than go with the vendor image I had a look at mainline and discovered it had support! So let's build a clean image:
noodles@buildhost:~$ mkdir ~/BPI
noodles@buildhost:~$ cd ~/BPI
noodles@buildhost:~/BPI$ ls
noodles@buildhost:~/BPI$ git clone https://source.denx.de/u-boot/u-boot.git
Cloning into 'u-boot'...
remote: Enumerating objects: 935825, done.
remote: Counting objects: 100% (5777/5777), done.
remote: Compressing objects: 100% (1967/1967), done.
remote: Total 935825 (delta 3799), reused 5716 (delta 3769), pack-reused 930048
Receiving objects: 100% (935825/935825), 186.15 MiB | 2.21 MiB/s, done.
Resolving deltas: 100% (785671/785671), done.
noodles@buildhost:~/BPI$ mkdir u-boot-build
noodles@buildhost:~/BPI$ cd u-boot
noodles@buildhost:~/BPI/u-boot$ git checkout v2023.07.02
...
HEAD is now at 83cdab8b2c Prepare v2023.07.02
noodles@buildhost:~/BPI/u-boot$ make O=../u-boot-build bananapi_m2_zero_defconfig
  HOSTCC  scripts/basic/fixdep
  GEN     Makefile
  HOSTCC  scripts/kconfig/conf.o
  YACC    scripts/kconfig/zconf.tab.c
  LEX     scripts/kconfig/zconf.lex.c
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTLD  scripts/kconfig/conf
#
# configuration written to .config
#
make[1]: Leaving directory '/home/noodles/BPI/u-boot-build'
noodles@buildhost:~/BPI/u-boot$ cd ../u-boot-build/
noodles@buildhost:~/BPI/u-boot-build$ make CROSS_COMPILE=arm-linux-gnueabihf-
  GEN     Makefile
scripts/kconfig/conf  --syncconfig Kconfig
...
  LD      spl/u-boot-spl
  OBJCOPY spl/u-boot-spl-nodtb.bin
  COPY    spl/u-boot-spl.bin
  SYM     spl/u-boot-spl.sym
  MKIMAGE spl/sunxi-spl.bin
  MKIMAGE u-boot.img
  COPY    u-boot.dtb
  MKIMAGE u-boot-dtb.img
  BINMAN  .binman_stamp
  OFCHK   .config
noodles@buildhost:~/BPI/u-boot-build$ ls -l u-boot-sunxi-with-spl.bin
-rw-r--r-- 1 noodles noodles 494900 Aug  8 08:06 u-boot-sunxi-with-spl.bin
I had the advantage here of already having a host set up to cross-build armhf binaries, but this was all done on a Debian bookworm host with packages from main. I've put my build up here in case it's useful to someone - everything else below can be done on a normal x86_64 host. Next I needed a Debian installer. I went for the netboot variant - although I was writing it to SD rather than TFTP booting, I wanted as much as possible to come over the network.
noodles@buildhost:~/BPI$ wget https://deb.debian.org/debian/dists/bookworm/main/installer-armhf/20230607%2Bdeb12u1/images/netboot/netboot.tar.gz
...
2023-08-08 10:15:03 (34.5 MB/s) - 'netboot.tar.gz' saved [37851404/37851404]
noodles@buildhost:~/BPI$ tar -axf netboot.tar.gz
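It doesn't hurt to verify the download first. Assuming the SHA256SUMS file sits at the images/ level of the same directory (an assumption worth double-checking against the mirror layout), something like this does the job:
noodles@buildhost:~/BPI$ wget -q https://deb.debian.org/debian/dists/bookworm/main/installer-armhf/20230607%2Bdeb12u1/images/SHA256SUMS
noodles@buildhost:~/BPI$ grep 'netboot/netboot\.tar\.gz$' SHA256SUMS   # the expected hash
noodles@buildhost:~/BPI$ sha256sum netboot.tar.gz                      # compare the two by eye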
Then I took a suitable microSD card and set it up with a 500M primary VFAT partition, leaving the rest for Linux proper. I could have got away with a smaller VFAT partition, but I'd initially thought I might need to put some more installation files on it.
noodles@buildhost:~/BPI$ sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): o
Created a new DOS (MBR) disklabel with disk identifier 0x793729b3.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-60440575, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-60440575, default 60440575): +500M
Created a new partition 1 of type 'Linux' and of size 500 MiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): c
Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.
Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (2-4, default 2):
First sector (1026048-60440575, default 1026048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (534528-60440575, default 60440575):
Created a new partition 2 of type 'Linux' and of size 28.3 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
$ sudo mkfs -t vfat -n BPI-UBOOT /dev/sdb1
mkfs.fat 4.2 (2021-01-31)
The bootloader image gets written 8k into the SD card (our first partition starts at sector 2048, i.e. 1M into the device, so there's plenty of space here):
noodles@buildhost:~/BPI$ sudo dd if=u-boot-build/u-boot-sunxi-with-spl.bin of=/dev/sdb bs=1024 seek=8
483+1 records in
483+1 records out
494900 bytes (495 kB, 483 KiB) copied, 0.0282234 s, 17.5 MB/s
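Optionally, the write can be checked by reading the same region back and comparing it against the image. This is just a sanity-check sketch; -n limits the comparison to the exact image size noted above:
noodles@buildhost:~/BPI$ sudo dd if=/dev/sdb bs=1024 skip=8 count=484 2>/dev/null | cmp -n 494900 - u-boot-build/u-boot-sunxi-with-spl.bin && echo bootloader OK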
Copy the Debian installer files onto the VFAT partition:
noodles@buildhost:~/BPI$ cp -r debian-installer/ /media/noodles/BPI-UBOOT/
Unmount the SD from the build host, pop it into the M2 Zero, boot it up while connected to the serial console, hit a key to stop autoboot and tell it to boot the installer:
U-Boot SPL 2023.07.02 (Aug 08 2023 - 09:05:44 +0100)
DRAM: 512 MiB
Trying to boot from MMC1
U-Boot 2023.07.02 (Aug 08 2023 - 09:05:44 +0100) Allwinner Technology
CPU:   Allwinner H3 (SUN8I 1680)
Model: Banana Pi BPI-M2-Zero
DRAM:  512 MiB
Core:  60 devices, 17 uclasses, devicetree: separate
WDT:   Not starting watchdog@1c20ca0
MMC:   mmc@1c0f000: 0, mmc@1c10000: 1
Loading Environment from FAT... Unable to read "uboot.env" from mmc0:1...
In:    serial
Out:   serial
Err:   serial
Net:   No ethernet found.
Hit any key to stop autoboot:  0
=> setenv dibase /debian-installer/armhf
=> fatload mmc 0:1 ${kernel_addr_r} ${dibase}/vmlinuz
5333504 bytes read in 225 ms (22.6 MiB/s)
=> setenv bootargs "console=ttyS0,115200n8"
=> fatload mmc 0:1 ${fdt_addr_r} ${dibase}/dtbs/sun8i-h2-plus-bananapi-m2-zero.dtb
25254 bytes read in 7 ms (3.4 MiB/s)
=> fdt addr ${fdt_addr_r} 0x40000
Working FDT set to 43000000
=> fatload mmc 0:1 ${ramdisk_addr_r} ${dibase}/initrd.gz
31693887 bytes read in 1312 ms (23 MiB/s)
=> bootz ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r}
Kernel image @ 0x42000000 [ 0x000000 - 0x516200 ]
## Flattened Device Tree blob at 43000000
   Booting using the fdt blob at 0x43000000
Working FDT set to 43000000
   Loading Ramdisk to 481c6000, end 49fffc3f ... OK
   Loading Device Tree to 48183000, end 481c5fff ... OK
Working FDT set to 48183000
Starting kernel ...
At this point the installer runs and you can do a normal install. Well, except the wifi wasn't detected, I think because the netinst images don't include firmware. I spent a bit of time trying to figure out how to include it, but ultimately ended up installing over a USB ethernet dongle, which Just Worked and was less faff. Installing firmware-brcm80211 once installation completed allowed the built-in wifi to work fine. After install you need to configure u-boot to boot without intervention. At the u-boot prompt (i.e. after hitting a key to stop autoboot):
=> setenv bootargs "console=ttyS0,115200n8 root=LABEL=BPI-ROOT ro"
=> setenv bootcmd 'ext4load mmc 0:2 ${fdt_addr_r} /boot/sun8i-h2-plus-bananapi-m2-zero.dtb ; fdt addr ${fdt_addr_r} 0x40000 ; ext4load mmc 0:2 ${kernel_addr_r} /boot/vmlinuz ; ext4load mmc 0:2 ${ramdisk_addr_r} /boot/initrd.img ; bootz ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r}'
=> saveenv
Saving Environment to FAT... OK
=> reset
This is assuming you have /boot on partition 2 of the SD - I left the first partition as VFAT (that's where the u-boot environment will be saved) and just used all of the rest as a single ext4 partition. I did have to do an e2label /dev/sdb2 BPI-ROOT (sketched below) to label / appropriately; otherwise I occasionally saw the SD card appear as mmc1 under Linux (I'm guessing due to asynchronous boot ordering with the wifi). You should now find the device boots without intervention.
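For reference, the labelling step looks like this when run against the SD card on the build host (a sketch; adjust the device name to match your card reader):
noodles@buildhost:~/BPI$ sudo e2label /dev/sdb2 BPI-ROOT
noodles@buildhost:~/BPI$ sudo blkid /dev/sdb2   # should now report LABEL="BPI-ROOT"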

11 October 2023

Russell Coker: The PineTime

I have just got a PineTime smart watch [1] from Pine64. They cost $US27 each, which ended up as $144.63 Australian for three, including postage, when I ordered on the 16th of September; it's annoying that you can't order more than 3 at a time to reduce postage costs. The Australian online store Kogan has smart watches starting at about $15 [2] with Bluetooth and support for phone notifications, so the $48.21 for a PineTime doesn't compare well on just price and features. The watches Kogan sells start getting into high resolution at around the $25 price point, and many of them have features like 24*7 heart monitoring that the PineTime lacks (it just measures when you request it). No-one would order a PineTime for being cheap or having lots of features; you order it because you want open hardware that allows you to do things your way. Also, the PineTime isn't going to be orphaned, while it's likely that in a few years most of the cheap watches sold by Kogan etc won't support the new phones running the latest version of Android.

The screen of the PineTime is 240*240 resolution (about 260dpi) with 64k colors. The screen resolution is lower than some high-end smart watches but higher than most phones and almost all monitors; I doubt that much benefit could be gained from higher resolution. Even on minimum brightness the screen is easy to read on all but the brightest sunny days. The compute capabilities are 4.5MB of flash storage, 64k of RAM, and a 64MHz CPU: this can't run Linux, and nothing like it will run Linux for a long time.

I've had the PineTime for 6 days now. I charged it once and it's now at 55% battery. It looks like it will last close to 2 weeks on a single charge, and it's claimed that a newer firmware will make the battery last longer.

Software

The main Android app for use with the PineTime is GadgetBridge, which I installed from the F-Droid repository. It had lots of click-through menus for allowing access to various Android features (contacts, bluetooth, draw over foreground, location, and more), but after that it was easy to set up. It was the first bluetooth device I've used which had a 6 digit PIN for connecting to a phone. Initially I used the PineTime with my Huawei Nova 7i [3]. The aim is to eventually have it run from my PinePhone Pro, but my test of the PinePhone Pro didn't go as well as hoped [4]. Now I'm using it with my Huawei Mate 10 Pro.

It comes with InfiniTime [5] installed as the default firmware; mine had 1.11.0, which is a fairly recent version. I will probably upgrade it soon to get the better power optimisation and the weather alerts in the watch face. I don't have any plans to use different watch firmware, and I don't have any plans to contribute to firmware development. I just can't hack on every FOSS project around; it's better to make big contributions to a small number of projects. For people who don't want the default firmware, the Wasp-OS project seems interesting as it's written in Python [6]. I don't like Python, but it's very popular. Python is particularly popular in ML development; it will be interesting to see if Wasp-OS becomes a preferred platform for smart watches that talk to GPT servers.

Generally the software works well. One annoyance is that when a notification goes away on the phone it remains on the PineTime and has to be manually dismissed. It would be nice if clearing notifications on the phone would clear them on the PineTime too.
The music control works with RocketPlayer on Android; it displays the track name and has options for pause/play and skipping forward and backward one track. Annoyingly, the current firmware doesn't allow configuring the main screens: from the primary screen you swipe down for notifications, right for settings, up for menus, and there's nothing defined for swipe left. I'd like to make swipe left the command to get to music control.

Hardware

It has a detachable band that appears to be within the common range of watch bands. According to the PineTime Wiki page [7] there is a selection of alternate bands that will fit it, but some don't because the band is recessed into the watch. It is IP67 rated, which means you can probably wear it while swimming. The charging contacts are exposed on the bottom of the case, which means that any chemicals left by pool water can be cleaned off, and as they are apparently not expected to be harmed by sweat and skin oil there shouldn't be a problem charging it. I have significant experience using a Samsung Galaxy S5 Mini, which is rated at IP67, in swimming pools. I had two problems with the S5 Mini when getting out of the pool: firstly, water in the headphone socket made the phone consider that it was in headphone mode and turn off the speakers; and secondly, it took hours to become dry enough to charge, and after many swims the charge rate dropped, presumably due to oxide on the contacts. There are reports of success when swimming with a PineTime. Generally it feels well made and appears more solid than the cheapest Kogan devices.

Conclusion

If I wanted monitoring for medical reasons then I would choose a different smart watch. I've read about people doing things like tracking their body stats 24*7 and trying to discover useful things; the PineTime is not a good option for that BioHacking type of use. However, if I did have a need for such things, I'd probably just buy a second smart watch and have one on each wrist. The PineTime generally works well. It's a pity it has fewer hardware features than closed devices that are cheaper, but having firmware that can be continually improved by the community is good. The continually expanding use of mobile phone technology for custom corporate purposes (such as a mobile phone in a custom case for scanning prices etc in a supermarket) has some potential overlap with this. I can imagine someone adding some custom features to a PineTime for such use. When a supermarket chain has 200,000 employees (as Woolworths in Australia does), paying for a few months of software development work to make a smart watch do specific things for that company could provide significant value. There are probably some business opportunities for FOSS developers to hack on extra hardware on a PineTime and write software to support it. I recommend that everyone who's into FOSS buy one of these. Preferably make a deal with two friends to get the minimum postage cost.

7 October 2023

Louis-Philippe Véronneau: Montreal's Debian & Stuff - "September" 2023

Last Sunday, our local Debian user group gathered to chat, to work on Debian and to do other, non-Debian related hacking. A "Debian & Stuff"! It had been a while since we held a proper meetup. Our last event was the Montreal BSP we organised back in March 2023... We somewhat missed the window for a June meetup, and summer events never seem to gather a good crowd, so I didn't try to organise one. All this to say it was nice to see folks from the Montreal Debian community :) This event was also the first time we were hosted by L'Espace des possibles - Petite Patrie, a social venue that aims to provide a space for not-for-profit activities, like repair cafés, sewing classes, board game nights, etc. It was really nice and we will surely meet there again in the future.
A group picture during the event
Many people came to the event, including some new ones. Although people always tend to come and go during the day, a total of 12 people attended. As always, people worked on very different projects! One of the focuses of this D&S was assembling AirGradient DIY basic kits. Our local community has been talking a lot about air quality metrics in the past few months1. Tiago thus decided to have a company print the PCBs for this kit and graciously gave away a few spares. Michael then took it upon himself to order parts on AliExpress, and a few of us ended up soldering the kits together while chatting.
An AirGradient DIY basic kit, semi-assembled
Otherwise, some Debian work was also done. The whole event was super fun, the tacos we had for lunch were delicious (and very authentic!), and we ended up at a local microbrewery to share a pint later in the evening. Looking forward to the next event!

  1. Mostly as a result of the large forest fires in Canada this summer. I myself blogged twice about air quality-related projects recently.

22 September 2023

Scarlett Gately Moore: KDE: KDE Neon updates! Qt6 transition moving along.

With user edition out the door last week, this week was spent stabilizing unstable! I spent some time sorting out our Calamares installer being quite grumpy, which is now fixed by reverting an upstream change. The unstable and developer ISOs were rebuilt and are installable. I also spent some time sorting out some issues with using an unreleased appstream (thanks ximion for the help with packagekit!). KDE applications are starting to switch to Qt6 in master this week, the big one being KDE PIM! This entails an enormous amount of re-packaging work. I have made a dent, sorta. To be continued next week. I fixed our signond / kaccounts line for Qt6, which entailed some work on upstream code that uses QStringList::toSet, which was removed in Qt6! Always learning new things! I have spent some time working on the KF6 content snap, working with Jarred to make sure his Qt6 content snap will work for us. Unfortunately, I do not have much time for this, as I must make money to survive; donations help free up time for it. Our new proposal with Kevin's super awesome management company has been submitted and we will hopefully hear back next week. Thanks for stopping by! Till next week. If you can spare some change, consider a donation. Thank you! https://gofund.me/b8b69e54
